9 research outputs found

    ViCLEVR: A Visual Reasoning Dataset and Hybrid Multimodal Fusion Model for Visual Question Answering in Vietnamese

    In recent years, Visual Question Answering (VQA) has gained significant attention for its diverse applications, including intelligent car assistance, aiding visually impaired individuals, and document image information retrieval using natural-language queries. VQA requires effective integration of information from questions and images to generate accurate answers. Neural models for VQA have made remarkable progress on large-scale datasets, with a primary focus on resource-rich languages such as English. To address this gap, we introduce the ViCLEVR dataset, a pioneering collection for evaluating visual reasoning capabilities in Vietnamese while mitigating biases. The dataset comprises over 26,000 images and 30,000 question-answer pairs (QAs), with each question annotated for the type of reasoning involved. Leveraging this dataset, we conduct a comprehensive analysis of contemporary visual reasoning systems, offering valuable insights into their strengths and limitations. Furthermore, we present PhoVIT, a multimodal fusion model that identifies objects in images based on the question. The architecture employs transformers to enable simultaneous reasoning over textual and visual data, merging both modalities at an early stage of the model. The experimental findings demonstrate that our proposed model achieves state-of-the-art performance across four evaluation metrics. The accompanying code and dataset are publicly available at https://github.com/kvt0012/ViCLEVR; this provision seeks to stimulate advances within the research community and foster the development of multimodal fusion algorithms tailored to low-resource languages such as Vietnamese.
    Comment: Preprint version, submitted to a journal
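    The abstract does not spell out the fusion mechanism, so below is a minimal sketch of the early-fusion idea it describes: projected question and image tokens are concatenated before a single transformer encoder, so attention operates jointly over both modalities. All names and dimensions here (EarlyFusionVQA, text_dim, image_dim) are illustrative assumptions, not the released PhoVIT code.

        # Hypothetical sketch of early multimodal fusion (not the released PhoVIT
        # code): question and image features are projected into a shared space,
        # concatenated, and passed through one transformer encoder so that
        # self-attention spans both modalities from the first layer.
        import torch
        import torch.nn as nn

        class EarlyFusionVQA(nn.Module):
            def __init__(self, text_dim=768, image_dim=2048, d_model=512,
                         num_layers=4, num_answers=1000):
                super().__init__()
                self.text_proj = nn.Linear(text_dim, d_model)    # e.g. text-encoder token features
                self.image_proj = nn.Linear(image_dim, d_model)  # e.g. CNN region features
                layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8,
                                                   batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
                self.classifier = nn.Linear(d_model, num_answers)

            def forward(self, text_feats, image_feats):
                # text_feats: (B, T, text_dim); image_feats: (B, R, image_dim)
                tokens = torch.cat([self.text_proj(text_feats),
                                    self.image_proj(image_feats)], dim=1)  # early fusion
                fused = self.encoder(tokens)  # joint reasoning over text and image tokens
                return self.classifier(fused.mean(dim=1))  # pooled answer logits

    Concatenating the token sequences before the encoder, rather than combining pooled per-modality features at the end, is what distinguishes early fusion from late fusion.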

    Melanotic neuroectodermal tumor of infancy

    Introduction: Melanotic neuroectodermal tumor of infancy (MNTI) is a rare, rapidly growing pigmented neoplasm of neural crest origin, generally arising in infants during the first year of life. Case: We report a 15-month-old male who presented with a 2-month history of a rapidly growing anterior mass. A biopsy showed melanotic neuroectodermal tumor, and complete resection with negative margins was subsequently achieved. The patient is in remission at 11 months from surgery. Conclusion: Because of its rapid growth potential and locally destructive behaviour, early diagnosis is extremely important to limit local expansion. The treatment of choice for MNTI is surgical excision. Keywords: Melanotic neuroectodermal tumor, Infancy

    Human versus equine intramuscular antitoxin, with or without human intrathecal antitoxin, for the treatment of adults with tetanus: a 2 × 2 factorial randomised controlled trial

    Background: Intramuscular antitoxin is recommended in tetanus treatment, but there are few data comparing human and equine preparations. Tetanus toxin acts within the CNS, where penetration of peripherally administered antitoxin is limited; intrathecal antitoxin administration might therefore improve clinical outcomes compared with intramuscular injection.
    Methods: In a 2 × 2 factorial trial, all patients aged 16 years or older with a clinical diagnosis of generalised tetanus admitted to the intensive care unit of the Hospital for Tropical Diseases, Ho Chi Minh City, Vietnam, were eligible for study entry. Participants were randomly assigned first to 3000 IU human or 21 000 U equine intramuscular antitoxin, then to either 500 IU intrathecal human antitoxin or a sham procedure. Interventions were delivered by independent clinicians, with attending clinicians and study staff masked to treatment allocations. The primary outcome was the requirement for mechanical ventilation. The analysis was done in the intention-to-treat population. The study is registered at ClinicalTrials.gov, NCT02999815; recruitment is completed.
    Findings: 272 adults were randomly assigned to interventions between Jan 8, 2017, and Sept 29, 2019, and followed up until May 2020. In the intrathecal allocation, 136 individuals were randomly assigned to the sham procedure and 136 to antitoxin; in the intramuscular allocation, 109 were randomly assigned to equine antitoxin and 109 to human antitoxin. 54 patients had received antitoxin at a previous hospital and were therefore excluded from the intramuscular antitoxin groups. Mechanical ventilation was given to 56 (43%) of 130 patients allocated to intrathecal antitoxin and 65 (50%) of 131 allocated to the sham procedure (relative risk [RR] 0·87, 95% CI 0·66–1·13; p=0·29). For the intramuscular allocation, 48 (45%) of 107 patients allocated to human antitoxin received mechanical ventilation compared with 48 (44%) of 108 patients allocated to equine antitoxin (RR 1·01, 95% CI 0·75–1·36; p=0·95). No clinically relevant difference in adverse events was reported: 22 (16%) of 136 individuals allocated to intrathecal antitoxin and 22 (11%) of 136 allocated to the sham procedure experienced adverse events related or possibly related to the intervention, as did 16 (15%) of 108 individuals allocated to equine intramuscular antitoxin and 17 (16%) of 109 allocated to human antitoxin. There were no intervention-related deaths.
    Interpretation: We found no advantage of intramuscular human antitoxin over intramuscular equine antitoxin in tetanus treatment. Intrathecal antitoxin administration was safe but did not provide overall benefit beyond intramuscular antitoxin administration.
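    As a reading aid for the reported effect sizes, the sketch below recomputes the intrathecal-allocation relative risk from the published counts. The log-scale normal-approximation confidence interval is our assumption for illustration; the trial's own statistical analysis may differ.

        # Recompute the intrathecal-allocation relative risk from the reported
        # counts. The log-scale normal-approximation CI (Katz method) is assumed
        # here for illustration and may not match the trial's exact analysis.
        import math

        def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
            rr = (events_a / n_a) / (events_b / n_b)
            se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)  # SE of log(RR)
            lo = math.exp(math.log(rr) - z * se)
            hi = math.exp(math.log(rr) + z * se)
            return rr, lo, hi

        # Intrathecal antitoxin vs sham: 56/130 vs 65/131 required ventilation.
        print(relative_risk(56, 130, 65, 131))
        # ≈ (0.87, 0.67, 1.13), closely matching the reported RR 0.87 (95% CI 0.66-1.13)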

    Clinical benefit of AI-assisted lung ultrasound in a resource-limited intensive care unit

    Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
    Comment: 27 pages, 17 figures + references and appendices, repo: https://github.com/google/BIG-bench
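    As a rough illustration of how a benchmark task of this kind can be scored, the sketch below evaluates an arbitrary text-generation callable with exact-match accuracy over input/target pairs, loosely following the layout of BIG-bench's JSON tasks. The schema and scoring rule here are simplifying assumptions; the actual repository defines many richer task types and metrics.

        # Minimal exact-match scorer over input/target pairs, loosely in the
        # spirit of BIG-bench's JSON tasks. The schema and scoring rule are
        # simplifying assumptions; the real benchmark defines many task types
        # and metrics beyond exact match.
        from typing import Callable, Dict, List

        def exact_match_score(model: Callable[[str], str],
                              examples: List[Dict[str, str]]) -> float:
            """Fraction of examples where the model output equals the target."""
            hits = sum(model(ex["input"]).strip().lower() ==
                       ex["target"].strip().lower() for ex in examples)
            return hits / len(examples)

        if __name__ == "__main__":
            # A toy model and two toy examples stand in for a real LM and task file.
            examples = [{"input": "2 + 2 =", "target": "4"},
                        {"input": "The capital of Vietnam is", "target": "Hanoi"}]

            def toy_model(prompt: str) -> str:
                return "4" if "2 + 2" in prompt else "Hanoi"

            print(exact_match_score(toy_model, examples))  # 1.0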